  • The Risk Assessment and Decision Making project has developed a proof-of-concept tool with the aim of providing critical fire planning information to emergency services, government and the public. The Fire Impact and Risk Evaluation Decision Support Tool (FireDST) is an advanced software program that can be used to understand the potential impacts a bushfire may have on community assets, infrastructure and people. FireDST demonstrates the ability to predict the probabilities of both neighbourhood and house loss, as well as the potential health impacts of bushfire smoke and the areas likely to be affected by a bushfire. This Fire Note reports on the Risk Assessment and Decision Making project, conducted under the Bushfire CRC Understanding Risk program. The project builds on the understanding developed during two projects from the first phase of the Bushfire CRC (2003-2010): the Bushfire Management Business Model project developed the PHOENIX RapidFire model for simulating bushfires (see Breakout Box 1), while the Building and Occupant Protection project informed the current understanding of a building's vulnerability to bushfire.

  • Australia is a country of diverse communities and environments. At any time of the year, it is possible to have simultaneous bushfires raging in the west, widespread flooding in the east and tropical cyclones threatening landfall in the north. These natural disasters have a significant impact on Australia’s communities, economy and environment. Although we cannot prevent natural disasters, a better understanding of exposure to these events can inform more effective prevention, preparedness, response and recovery (PPRR) decision-making across all levels of government. Exposure refers to the elements at risk from natural and man-made hazard events. Knowing who and what is at risk is imperative for the role of Emergency Management Australia (EMA), within the Attorney-General’s Department, in administering the Australian Government's financial assistance for response and recovery during major natural hazard events. Lacking in-house spatial expertise, EMA commissioned Geoscience Australia (GA) to enhance its event reporting with improved situational awareness mapping. The aim was to support EMA's decision-making process with innovative, timely and efficient access to fundamental, nationally consistent spatial data and disaster event information. GA addressed this requirement by designing an Exposure Report – a streamlined yet detailed snapshot of exposure information for any area of interest across Australia. The Exposure Report is generated by consolidating a range of national fundamental datasets to extract relevant attributes and present the information in a timely, concise and easily accessible report. The automated process quickly aggregates information for a variety of standard administrative boundaries or hazard-specific footprints.
It includes important exposure information such as estimated population and demographic indicators; building, business and infrastructure asset counts; reconstruction costs; and identified agricultural areas, commodities and their value. The customised report provides the information EMA requires in a way that can be readily accessed and interpreted to make timely and informed emergency management decisions. The request and delivery of the report are also integrated into EMA’s incident management system to simplify coordination, access and accountability between government departments. GA has enhanced the Australian Government’s ability to prioritise response and recovery assistance by improving timely access to detailed exposure information. EMA now has ready access to consistent baseline exposure information for any area across Australia, informing not only response and recovery but also planning, preparedness and mitigation initiatives that build more resilient communities.
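The aggregation step described above can be sketched as follows. This is a minimal illustration only: the record fields, region codes and values are hypothetical, standing in for the national fundamental datasets and administrative boundaries the real report consolidates.

```python
from collections import defaultdict

# Hypothetical asset records; the real report draws on national
# fundamental datasets with many more attributes.
assets = [
    {"region": "SA2-401", "type": "residential", "population": 3, "cost": 450_000},
    {"region": "SA2-401", "type": "commercial",  "population": 0, "cost": 1_200_000},
    {"region": "SA2-402", "type": "residential", "population": 4, "cost": 500_000},
]

def exposure_report(assets, area_of_interest):
    """Aggregate exposure attributes for assets inside an area of interest
    (here a set of administrative-boundary codes or a hazard footprint)."""
    summary = defaultdict(lambda: {"count": 0, "population": 0, "cost": 0})
    for asset in assets:
        if asset["region"] in area_of_interest:
            row = summary[asset["type"]]
            row["count"] += 1
            row["population"] += asset["population"]
            row["cost"] += asset["cost"]
    return dict(summary)

report = exposure_report(assets, area_of_interest={"SA2-401"})
```

The same pattern extends to any attribute carried on the source records (demographics, agricultural commodities), which is what makes the report generation automatable for arbitrary boundaries or footprints.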

  • Geoscience Australia (GA) is an agency committed to providing transparent and reproducible information. It is currently implementing an enterprise approach to provenance management using the PROV Data Model as its conceptual base; generic Linked Data tooling; a dedicated provenance store for provenance information management; and toolkits, integrated into business processes' software or provided as stand-alone interfaces, for provenance information capture, as recommended in a 2015 plan. Business analysis has been conducted for several agency processes, identifying provenance requirements for which solutions are now being implemented, and preliminary provenance information from these systems has been collected. Mapping processes to PROV and integrating reporting has been straightforward; however, we have encountered difficulties in enabling Linked Data access to the repositories and registries holding the processes' data and metadata, due to their heterogeneity and to existing perceptions about data stewardship. Such access to the source data behind scientific products, via process representations in PROV, is required for reproducibility.
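To show what mapping a process to PROV involves, here is a minimal sketch that expresses one process run using three PROV core terms (prov:used, prov:wasGeneratedBy, prov:wasAssociatedWith) as plain triples. The `ga:`-prefixed identifiers are illustrative, not real GA resource names.

```python
# PROV-O namespace (W3C PROV ontology).
PROV = "http://www.w3.org/ns/prov#"

def prov_activity_triples(activity, used, generated, agent):
    """Express one process run as (subject, predicate, object) triples:
    the activity used an input entity, generated an output entity,
    and was associated with a responsible agent."""
    return [
        (activity, PROV + "used", used),
        (generated, PROV + "wasGeneratedBy", activity),
        (activity, PROV + "wasAssociatedWith", agent),
    ]

# Hypothetical identifiers for a survey-processing run.
triples = prov_activity_triples(
    activity="ga:survey-processing-run-1",
    used="ga:raw-survey-data",
    generated="ga:survey-product",
    agent="ga:geoscience-australia",
)
```

Chaining such triples backwards from a product through each activity to its inputs is exactly the route from a scientific product to its source data that the reproducibility requirement depends on.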

  • A Linked Data API, built with Python Flask (an HTTP API framework), is used to deliver representations of GA's aerial Surveys online as Linked Data. The API reads data from another HTTP API, the ARGUS XML API, which is generated by Oracle software and delivers XML representations of Survey objects stored in the ARGUS database. The online endpoints of the ARGUS XML API accessed by this Surveys API are given in a config file within this API's code. Details about how to use this API are given in the main README file of the code repository (README.md).
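The core translation step — reading an ARGUS XML record and re-expressing it as a Linked Data resource — can be sketched with the standard library alone. The element names and base URI below are assumptions for illustration; the real names come from the Oracle-generated ARGUS XML API and the config file mentioned above.

```python
import xml.etree.ElementTree as ET

# Hypothetical ARGUS-style XML record; real element names will differ.
ARGUS_XML = """
<SURVEY>
  <SURVEYID>921</SURVEYID>
  <SURVEYNAME>Mount Isa Airborne Magnetics</SURVEYNAME>
  <STATE>QLD</STATE>
</SURVEY>
"""

def survey_to_linked_data(xml_text, base_uri="http://example.org/survey/"):
    """Parse one ARGUS XML Survey record and return a dict keyed by a
    dereferenceable URI, ready to serialise as JSON-LD or RDF."""
    root = ET.fromstring(xml_text)
    return {
        "@id": base_uri + root.findtext("SURVEYID"),
        "name": root.findtext("SURVEYNAME"),
        "state": root.findtext("STATE"),
    }

resource = survey_to_linked_data(ARGUS_XML)
```

In the deployed API, a Flask route would fetch the XML from the configured ARGUS endpoint and return the serialised resource; only the parsing/mapping core is shown here.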

  • Geoscience Australia (GA) has adopted Data Stewardship as an organisation-wide initiative to improve the way we manage and share our data. Although the theory behind data stewardship is well-defined and accepted and the benefits are generally well-understood, practical implementation requires an organisation to prepare for a long-term commitment of resources, both financial and human. Over the past four years, GA has undertaken strategic activities that prepare us for Data Stewardship. GA is now moving towards Data Stewardship as an operational capability and culture within the Agency.

  • This Agreements ontology is designed to model 'agreements': social contracts that include licences, laws, contracts, Memoranda of Understanding, standards and definitional metadata. Its purpose is to support data sharing by making explicit the relationships between agreements and data, and between agreements and Agents (people and organisations). Eventually it will also help with the interplay between different classes of agreements. We think of this ontology as a 'middle' ontology: one which specialises well-known, abstract, upper ontologies and can be used fairly widely, but which is expected to be used in particular contexts in conjunction with detailed, domain-specific, lower ontologies. We have tried to rely on existing agent, data manipulation, metadata and licence ontologies where possible. As such, we specialise the ORG and FOAF ontologies; the PROV ontology; the Dublin Core Terms RDF schema and the DCAT ontology; and the ODRS vocabulary and Creative Commons RDF data models for those areas, respectively.
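As a sketch of the relationships the ontology makes explicit, the triples below link an agreement (a licence) to a dataset and to an agent using well-known vocabulary terms. The `agr:Licence` class name is a hypothetical term standing in for the Agreements ontology's own vocabulary; the `ex:` identifiers are invented for illustration.

```python
# Illustrative triples: an agreement, the data it governs, and the agents involved.
triples = [
    ("ex:cc-by-4.0",            "rdf:type",      "agr:Licence"),      # hypothetical class
    ("ex:survey-dataset",       "dct:license",   "ex:cc-by-4.0"),     # agreement <-> data
    ("ex:cc-by-4.0",            "dct:publisher", "ex:creative-commons"),
    ("ex:geoscience-australia", "rdf:type",      "foaf:Agent"),       # agreement <-> agent side
]

def agreements_for(dataset, triples):
    """Find agreements attached to a dataset via dct:license."""
    return [o for s, p, o in triples if s == dataset and p == "dct:license"]
```

A query like `agreements_for` is the kind of question — "which agreements govern this data?" — that making these relationships explicit is meant to answer.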

  • The National Computational Infrastructure (NCI) at the Australian National University (ANU) has organised a priority set of more than 30 large-volume national earth and environmental data assets on a High Performance Data (HPD) node within a High Performance Computing (HPC) facility, as a special node under the Australian Government's National Collaborative Research Infrastructure Strategy (NCRIS) Research Data Storage Infrastructure (RDSI) project. The Australian National Geophysical Collection was identified as a nationally significant collection and approved as one of the RDSI-funded collections. It includes the most comprehensive publicly available collections of Australian airborne magnetic, gamma-ray, seismic, electromagnetic, magnetotelluric and gravity data sets. The total size allocated for this geophysical data collection is currently 300 terabytes. Organising this major geophysical data collection within a high performance computing environment creates a new capacity for accessing and processing data both at high resolution and at full-continent spatial extent. Further, by co-locating and harmonising the geophysical data assets with other significant national digital data collections (e.g. earth observation, geodesy, digital elevation, bathymetry), new opportunities have arisen for data-intensive interdisciplinary science at a scale and resolution not hitherto possible. To support this integrated HPC/HPD infrastructure, our data management practices include co-development of Data Management Plans (DMPs) with the data collection custodians; the development of standards-compliant catalogues of data collections and data sets; and the minting and maintenance of persistent identifiers. The data are accessible either directly or via international standards-compliant data services, including geospatial standard (ISO 19115) catalogues, metadata harvesting protocols (OAI-PMH) and OGC protocols.
A Virtual Geophysics Laboratory has also been established that links the geophysical data assets with online software and tools using cloud based scientific workflows.
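As an example of the standards-based access route, an OAI-PMH harvest of catalogue metadata is just a parameterised HTTP request. The endpoint URL and metadata prefix below are placeholders, not NCI's actual service details; `ListRecords` and `metadataPrefix` are standard OAI-PMH protocol terms.

```python
from urllib.parse import urlencode

# Placeholder endpoint; the real catalogue service URL will differ.
ENDPOINT = "http://example.org/oai"

def oai_pmh_request(verb, **kwargs):
    """Build an OAI-PMH protocol request URL, e.g. to harvest
    ISO 19115-style metadata records from a catalogue."""
    params = {"verb": verb, **kwargs}
    return ENDPOINT + "?" + urlencode(params)

# Harvest all records in an assumed metadata format.
url = oai_pmh_request("ListRecords", metadataPrefix="iso19115")
```

A harvester would fetch this URL, parse the XML response, and follow `resumptionToken`s for subsequent pages; only request construction is shown here.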